29 research outputs found
SCOR: Software-defined Constrained Optimal Routing Platform for SDN
A Software-defined Constrained Optimal Routing (SCOR) platform is introduced
as a Northbound interface in SDN architecture. It is based on constraint
programming techniques and is implemented in the MiniZinc modelling language. Using
constraint programming techniques in this Northbound interface has created an
efficient tool for implementing complex Quality of Service routing applications
in a few lines of code. The code includes only the problem statement and the
solution is found by a general solver program. A routing framework is
introduced based on SDN's architecture model which uses SCOR as its Northbound
interface and an upper layer of applications implemented in SCOR. The
performance of several implemented routing applications is evaluated across
different network topologies, network sizes and varying numbers of concurrent
flows. Comment: 19 pages, 11 figures, 11 algorithms, 3 tables
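The constrained optimal routing problem that SCOR expresses declaratively can be sketched in plain Python: find the least-cost path subject to a delay bound. This minimal sketch uses brute-force path enumeration in place of a MiniZinc/CP solver, and the topology, link costs and delays are purely illustrative.

```python
def all_simple_paths(graph, src, dst, path=None):
    """Enumerate all simple (cycle-free) paths from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph[src]:
        if nxt not in path:
            yield from all_simple_paths(graph, nxt, dst, path)

def constrained_optimal_path(graph, cost, delay, src, dst, max_delay):
    """Least-cost path whose total delay stays within max_delay."""
    best = None
    for p in all_simple_paths(graph, src, dst):
        edges = list(zip(p, p[1:]))
        d = sum(delay[e] for e in edges)
        c = sum(cost[e] for e in edges)
        if d <= max_delay and (best is None or c < best[0]):
            best = (c, p)
    return best

# Illustrative 4-node topology: cheap path A-B-D violates the delay bound,
# so the solver must fall back to the costlier but faster A-C-D.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
cost  = {("A","B"): 1, ("B","D"): 1, ("A","C"): 1, ("C","D"): 5}
delay = {("A","B"): 9, ("B","D"): 9, ("A","C"): 2, ("C","D"): 2}

print(constrained_optimal_path(graph, cost, delay, "A", "D", max_delay=10))
```

A CP solver avoids this exhaustive enumeration, which is exactly why SCOR delegates the search to a general solver and keeps only the problem statement in the model.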
Network Intrusion Detection System in a Light Bulb
Internet of Things (IoT) devices are progressively being utilised in a
variety of edge applications to monitor and control home and industry
infrastructure. Due to the limited compute and energy resources, active
security protections are usually minimal in many IoT devices. This has created
a critical security challenge that has attracted researchers' attention in the
field of network security. Despite a large number of proposed Network Intrusion
Detection Systems (NIDSs), there is limited research into practical IoT
implementations, and to the best of our knowledge, no edge-based NIDS has been
demonstrated to operate on common low-power chipsets found in the majority of
IoT devices, such as the ESP8266. This research aims to address this gap by
pushing the boundaries on low-power Machine Learning (ML) based NIDSs. We
propose and develop an efficient and low-power ML-based NIDS, and demonstrate
its applicability for IoT edge applications by running it on a typical smart
light bulb. We also evaluate our system against other proposed edge-based NIDSs
and show that our model achieves higher detection performance while being
significantly faster and smaller, and is therefore applicable to a wider
range of IoT edge devices.
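The kind of model that fits a low-power chipset such as the ESP8266 can be sketched as a hand-rolled decision-stump cascade over per-flow features. The feature names and thresholds below are illustrative assumptions, not the paper's trained model.

```python
def classify_flow(pkt_rate, mean_pkt_size, syn_ratio):
    """Tiny rule cascade over flow features; thresholds are illustrative,
    not taken from the paper's trained model."""
    if syn_ratio > 0.8 and pkt_rate > 100.0:
        return "attack"   # SYN-flood-like pattern
    if mean_pkt_size < 64 and pkt_rate > 500.0:
        return "attack"   # small-packet flood
    return "benign"

print(classify_flow(pkt_rate=650.0, mean_pkt_size=60, syn_ratio=0.1))
print(classify_flow(pkt_rate=20.0, mean_pkt_size=800, syn_ratio=0.05))
```

A model of this shape compiles to a handful of comparisons, which is why tree-based classifiers are attractive where active security protections would otherwise be minimal.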
Towards a Standard Feature Set of NIDS Datasets
Network Intrusion Detection Systems (NIDSs) datasets are essential tools used
by researchers for the training and evaluation of Machine Learning (ML)-based
NIDS models. There are currently five datasets, known as NF-UNSW-NB15,
NF-BoT-IoT, NF-ToN-IoT, NF-CSE-CIC-IDS2018 and NF-UQ-NIDS, which are made up of
a common feature set. However, their performance in classifying network
traffic, mainly using multi-class classification, is often unreliable.
Therefore, this paper proposes a standard NetFlow feature set, to be used in
future NIDS datasets due to the tremendous benefits of having a common feature
set. NetFlow has been widely utilised in the networking industry for its
practical scaling properties. The evaluation is done by extracting and labeling
the proposed features from four well-known datasets. The newly generated
datasets are known as NF-UNSW-NB15-v2, NF-BoT-IoT-v2, NF-ToN-IoT-v2,
NF-CSE-CIC-IDS2018-v2 and NF-UQ-NIDS-v2. Their performances have been compared
to their respective original datasets using an Extra Trees classifier, showing
a great improvement in the attack detection accuracy. They have been made
publicly available to use for research purposes. Comment: 13 pages, 4 figures, 13 tables. arXiv admin note: substantial text
overlap with arXiv:2011.0914
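The benefit of a common feature set is that dataset-specific flow records can be projected onto one shared schema before training. The sketch below shows that projection with a handful of NetFlow-style field names; these names and the source-dataset layout are illustrative, not the paper's exact feature set.

```python
# A few NetFlow-style fields standing in for the full common feature set.
COMMON_FIELDS = ["IPV4_SRC_ADDR", "IPV4_DST_ADDR", "L4_SRC_PORT",
                 "L4_DST_PORT", "PROTOCOL", "IN_BYTES", "IN_PKTS"]

def to_common_schema(record, mapping):
    """Project a dataset-specific flow record onto the shared feature set."""
    return {field: record[mapping[field]] for field in COMMON_FIELDS}

# A record in a hypothetical source-dataset layout, plus its field mapping.
raw = {"srcip": "10.0.0.1", "dstip": "10.0.0.2", "sport": 1234,
       "dport": 80, "proto": 6, "sbytes": 1500, "spkts": 10}
mapping = {"IPV4_SRC_ADDR": "srcip", "IPV4_DST_ADDR": "dstip",
           "L4_SRC_PORT": "sport", "L4_DST_PORT": "dport",
           "PROTOCOL": "proto", "IN_BYTES": "sbytes", "IN_PKTS": "spkts"}

print(to_common_schema(raw, mapping))
```

Once every dataset is expressed in the same schema, a single classifier can be trained and compared across all of them, which is the premise behind the NF-*-v2 datasets.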
From Zero-Shot Machine Learning to Zero-Day Attack Detection
The standard ML methodology assumes that test samples are drawn from the same
set of classes observed during training, where the model extracts and learns
useful patterns in order to detect new samples belonging to those classes.
However, in certain applications such as Network Intrusion
Detection Systems, it is challenging to obtain data samples for all attack
classes that the model will most likely observe in production. ML-based NIDSs
face new attack traffic known as zero-day attacks, which are not used in the
training of the learning models because they did not exist at the time. In this
paper, a zero-shot learning methodology has been proposed to evaluate the ML
model performance in the detection of zero-day attack scenarios. In the
attribute learning stage, the ML models map the network data features to
distinguish semantic attributes from known attack (seen) classes. In the
inference stage, the models are evaluated in the detection of zero-day attack
(unseen) classes by constructing the relationships between known attacks and
zero-day attacks. A new metric is defined as Zero-day Detection Rate, which
measures the effectiveness of the learning model in the inference stage. The
results demonstrate that the majority of attack classes do not represent
significant risks to organisations adopting an ML-based NIDS in a zero-day
attack scenario. However, for certain attack groups identified in this
paper, such systems are not effective in applying the learnt attributes of
attack behaviour to detect them as malicious. Further analysis was conducted
using the Wasserstein Distance technique to measure how different such attacks
are from other attack types used in the training of the ML model. The results
demonstrate that sophisticated attacks with a low zero-day detection rate have
a significantly distinct feature distribution compared to the other attack
classes.
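The two quantities at the heart of this evaluation can be sketched directly: the zero-day detection rate as the fraction of unseen-class attack flows flagged malicious, and the 1-D Wasserstein (earth mover's) distance between two feature samples. The simple equal-sample-size form of the distance is used here, and all numbers are illustrative.

```python
def zero_day_detection_rate(preds):
    """Fraction of held-out (unseen-class) attack flows flagged as malicious.
    `preds` is a list of booleans: True means the model flagged an attack."""
    return sum(preds) / len(preds)

def wasserstein_1d(a, b):
    """1-D earth mover's distance between two equal-size samples:
    the mean absolute difference of the sorted values."""
    a, b = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Illustrative values, not results from the paper.
zdr = zero_day_detection_rate([True, True, False, True])
d = wasserstein_1d([0.1, 0.2, 0.3], [0.4, 0.5, 0.6])
print(zdr, round(d, 3))
```

A large distance between a zero-day class and the training classes corresponds to the paper's finding that such attacks have a distinctly different feature distribution, and hence a low detection rate.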
A Cyber Threat Intelligence Sharing Scheme based on Federated Learning for Network Intrusion Detection
The use of Machine Learning (ML) in the detection of network attacks has been
effective when designed and evaluated within a single organisation. However, it has
been very challenging to design an ML-based detection system by utilising
heterogeneous network data samples originating from several sources. This is
mainly due to privacy concerns and the lack of a universal format of datasets.
In this paper, we propose a collaborative federated learning scheme to address
these issues. The proposed framework allows multiple organisations to join
forces in the design, training, and evaluation of a robust ML-based network
intrusion detection system. The threat intelligence scheme relies on two
critical aspects: first, the availability of network traffic data in a common
format, allowing meaningful patterns to be extracted across data sources;
and second, the adoption of a federated learning mechanism to avoid
the necessity of sharing sensitive users' information between organisations. As
a result, each organisation benefits from the other organisations' cyber threat
intelligence while maintaining the privacy of its data internally. The model is
trained locally and only the updated weights are shared with the remaining
participants in the federated averaging process. The framework has been
designed and evaluated in this paper by using two key datasets in a NetFlow
format known as NF-UNSW-NB15-v2 and NF-BoT-IoT-v2. Two other common scenarios
are considered in the evaluation process: a centralised training method where
the local data samples are shared with other organisations and a localised
training method where no threat intelligence is shared. The results demonstrate
the efficiency and effectiveness of the proposed framework by designing a
universal ML model effectively classifying benign and intrusive traffic
originating from multiple organisations without the need for local data
exchange.
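The federated averaging step described above (only updated weights are shared, never local data) can be sketched as a sample-size-weighted mean of each participant's parameters. Flat lists stand in for real model weights, and the organisation sizes are illustrative.

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of model parameters (FedAvg-style): each client's
    update counts in proportion to its local sample count. Flat weight
    vectors stand in for real model parameters."""
    total = sum(client_sizes)
    n = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n)]

# Two hypothetical organisations with different local data volumes.
w_org1, w_org2 = [1.0, 2.0], [3.0, 4.0]
global_w = federated_average([w_org1, w_org2], client_sizes=[100, 300])
print(global_w)  # [2.5, 3.5]
```

Each round, the aggregated `global_w` is sent back to every participant for further local training, so threat intelligence flows between organisations while the raw flow records stay on-premises.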
Anomal-E: A Self-Supervised Network Intrusion Detection System based on Graph Neural Networks
This paper investigates the application of Graph Neural Networks (GNNs) to
self-supervised network intrusion and anomaly detection. GNNs are a deep
learning approach for graph-based data that incorporate graph structures into
learning to generalise graph representations and output embeddings. As network
flows are naturally graph-based, GNNs are a suitable fit for analysing and
learning network behaviour. The majority of current implementations of
GNN-based Network Intrusion Detection Systems (NIDSs) rely heavily on labelled
network traffic, which can restrict not only the amount and structure of input
traffic, but also the NIDS's potential to adapt to unseen attacks. To overcome
these restrictions, we present Anomal-E, a GNN approach to intrusion and
anomaly detection that leverages edge features and graph topological structure
in a self-supervised process. This approach is, to the best of our knowledge,
first successful and practical approach to network intrusion detection that
utilises network flows in a self-supervised, edge-leveraging GNN. Experimental
results on two modern benchmark NIDS datasets not only clearly demonstrate the
improvement gained by using Anomal-E embeddings rather than raw features, but
also the potential Anomal-E has for detection on wild network traffic.
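The graph representation Anomal-E operates on can be sketched by treating hosts as nodes and each network flow as an edge carrying its feature vector. This shows only the graph construction; the self-supervised edge-embedding step itself is omitted, and the flow records are illustrative.

```python
def flows_to_graph(flows):
    """Build an endpoint graph from flow records: hosts become nodes,
    and each flow becomes an edge carrying its feature vector."""
    nodes, edges = set(), []
    for f in flows:
        nodes.update([f["src"], f["dst"]])
        edges.append((f["src"], f["dst"], f["features"]))
    return nodes, edges

# Illustrative flow records with small, made-up feature vectors.
flows = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "features": [1500, 10, 6]},
    {"src": "10.0.0.2", "dst": "10.0.0.3", "features": [64, 1, 17]},
]
nodes, edges = flows_to_graph(flows)
print(len(nodes), len(edges))  # 3 2
```

Because the interesting information lives on the edges (the flows), an edge-leveraging GNN is a natural fit for this structure.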
XG-BoT: An Explainable Deep Graph Neural Network for Botnet Detection and Forensics
In this paper, we propose XG-BoT, an explainable deep graph neural network
model for botnet node detection. The proposed model is mainly composed of a
botnet detector and an explainer for automatic forensics. The XG-BoT detector
can effectively detect malicious botnet nodes under large-scale networks.
Specifically, it utilizes a grouped reversible residual connection with a graph
isomorphism network to learn expressive node representations from the botnet
communication graphs. The explainer in XG-BoT can perform automatic network
forensics by highlighting suspicious network flows and related botnet nodes. We
evaluated XG-BoT on real-world, large-scale botnet network graphs. Overall,
XG-BoT is able to outperform the state-of-the-art in terms of evaluation
metrics. In addition, we show that the XG-BoT explainer can generate useful
explanations based on GNNExplainer for automatic network forensics. Comment: 6 pages, 3 figures
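The graph isomorphism network update at the core of the detector can be sketched on scalar node features as h'_v = MLP((1 + eps) * h_v + sum of neighbour features). In this minimal sketch the learnable MLP is replaced by the identity, and the grouped reversible residual connections are omitted; the graph and features are illustrative.

```python
def gin_update(h, adj, eps=0.0):
    """One Graph Isomorphism Network aggregation on scalar node features:
    h'_v = (1 + eps) * h_v + sum of neighbour features.
    The learnable MLP that normally follows is omitted (identity)."""
    return {v: (1 + eps) * h[v] + sum(h[u] for u in adj[v]) for v in h}

# Illustrative path graph a - b - c with scalar node features.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
h = {"a": 1.0, "b": 2.0, "c": 3.0}
print(gin_update(h, adj))  # {'a': 3.0, 'b': 6.0, 'c': 5.0}
```

Stacking such layers lets each node's representation summarise progressively larger neighbourhoods of the botnet communication graph, which is what makes the learned node representations expressive.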
Exploring Edge TPU for Network Intrusion Detection in IoT
This paper explores Google's Edge TPU for implementing a practical network
intrusion detection system (NIDS) at the edge of IoT, based on a deep learning
approach. While there are a significant number of related works that explore
machine learning based NIDS for the IoT edge, they generally do not consider
the issue of the required computational and energy resources. The focus of this
paper is the exploration of deep learning-based NIDS at the edge of IoT, with a
particular emphasis on computational and energy efficiency. The paper studies
Google's Edge TPU as a hardware platform, and considers the following
three key metrics: computation (inference) time, energy efficiency and the
traffic classification performance. Various scaled model sizes of two major
deep neural network architectures are used to investigate these three metrics.
The performance of the Edge TPU-based implementation is compared with that of
an energy efficient embedded CPU (ARM Cortex A53). Our experimental evaluation
shows some unexpected results, such as the fact that the CPU significantly
outperforms the Edge TPU for small model sizes. Comment: 22 pages, 11 figures
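The first of the paper's three metrics, inference time, can be sketched as a simple wall-clock measurement averaged over repeated runs. The "model" below is a stand-in dot product, not one of the paper's neural networks, and the weights are illustrative.

```python
import time

def measure_inference(model, sample, runs=1000):
    """Average per-inference wall-clock time over `runs` repetitions."""
    start = time.perf_counter()
    for _ in range(runs):
        model(sample)
    elapsed = time.perf_counter() - start
    return elapsed / runs

# Stand-in "model": a dot product over a small feature vector.
weights = [0.2, -0.5, 1.0, 0.3]
model = lambda x: sum(w * xi for w, xi in zip(weights, x))

t = measure_inference(model, [1.0, 2.0, 3.0, 4.0])
print(f"avg inference: {t * 1e6:.1f} us, throughput: {1 / t:.0f} inf/s")
```

Running the same measurement on different hardware (Edge TPU vs. an embedded CPU) and across scaled model sizes is what surfaces crossover effects like the CPU winning at small model sizes.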
E-GraphSAGE: A Graph Neural Network based Intrusion Detection System for IoT
This paper presents a new Network Intrusion Detection System (NIDS) based on
Graph Neural Networks (GNNs). GNNs are a relatively new sub-field of deep
neural networks, which can leverage the inherent structure of graph-based data.
Training and evaluation data for NIDSs are typically represented as flow
records, which can naturally be represented in a graph format. This establishes
the potential and motivation for exploring GNNs for network intrusion
detection, which is the focus of this paper. Current studies on machine
learning-based NIDSs consider network flows independently rather than
taking their interconnected patterns into account. This is a key
limitation in the detection of sophisticated IoT network attacks such as DDoS
and distributed port scan attacks launched by IoT devices. In this paper, we
propose E-GraphSAGE, a GNN approach that overcomes this limitation and
allows capturing both the edge features of a graph as well as the topological
information for network anomaly detection in IoT networks. To the best of our
knowledge, our approach is the first successful, practical, and extensively
evaluated approach of applying Graph Neural Networks on the problem of network
intrusion detection for IoT using flow-based data. Our extensive experimental
evaluation on four recent NIDS benchmark datasets shows that our approach
outperforms the state-of-the-art in terms of key classification metrics, which
demonstrates the potential of GNNs in network intrusion detection, and provides
motivation for further research. Comment: 9 pages, 5 figures, 6 tables
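The core idea of capturing interconnected flow patterns can be sketched as a single message-passing step in the spirit of E-GraphSAGE: each node's embedding aggregates the features of its incident edges (flows). Real layers apply learned weight matrices and non-linearities over feature vectors; this sketch uses a plain mean over scalar features, and the flows are illustrative.

```python
def edge_aware_aggregate(edges, nodes):
    """One simplified message-passing step in the spirit of E-GraphSAGE:
    each node's embedding is the mean of its incident edges' (flow)
    features. Real layers add learned weights and non-linearities."""
    incident = {v: [] for v in nodes}
    for u, v, feat in edges:
        incident[u].append(feat)
        incident[v].append(feat)
    return {v: sum(fs) / len(fs) if fs else 0.0 for v, fs in incident.items()}

# Two illustrative flows, each reduced to a single scalar feature.
edges = [("h1", "h2", 100.0), ("h2", "h3", 300.0)]
emb = edge_aware_aggregate(edges, {"h1", "h2", "h3"})
print(emb["h2"])  # 200.0
```

Because `h2` aggregates information from both of its flows, distributed patterns such as many low-rate flows converging on one host become visible in its embedding, which per-flow classifiers cannot see.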